
L'affaire OpenAI sparks fears of AI apocalypse sans regulation

The OpenAI debacle has proved one thing: The AI industry needs to be regulated, and fast.


25 Nov 2023 4:39 PM IST

New Delhi, Nov 25: The OpenAI debacle has proved one thing: The AI industry needs to be regulated, and fast.

The six days of chaos at the Sam Altman-run ChatGPT maker have revealed serious lapses in the company’s self-governance, amid worries over AI as an existential risk to humanity.

As the world seeks answers from OpenAI on why it sacked Altman in the first place, reports have emerged that a secret AI project named 'Q*' (pronounced Q-Star), which could threaten humanity, may have been the reason behind Altman's dramatic ouster as CEO.

Several staff researchers reportedly sent the OpenAI board a letter warning that a powerful AI breakthrough could threaten humanity.

The letter and the AI algorithm were a catalyst for the board's decision to oust Altman, according to a Reuters report citing sources.

The previously unknown letter was reportedly just one factor "among a longer list of grievances by the board that led to Altman's firing".

The researchers who wrote the letter did not comment, and neither did OpenAI, leaving the top-secret AI project shrouded in mystery.

The ChatGPT maker had reportedly made progress on the 'Q*' project, which could be a breakthrough in the search for superintelligence, also known as artificial general intelligence (AGI).

“What I know for certain is we don't have AGI. I know with certainty there was a colossal failure of governance,” David Shrier, professor of practice in AI and innovation at Imperial College Business School in London, told Wired.

Regulators around the world will be watching closely what happens at OpenAI next. The failure of OpenAI’s governance structure is likely to amplify calls for stronger public oversight.

In 2019, OpenAI transitioned from a non-profit to a 'capped-profit' model.

According to the company’s blog post, OpenAI wanted to increase its ability to raise capital while still serving its mission, and “no pre-existing legal structure they knew of struck the right balance”.

OpenAI came up with a novel structure that allowed the non-profit to control the direction of a for-profit entity while giving investors a "capped" upside of 100x.

OpenAI’s board of directors has undergone numerous changes since its inception. Elon Musk resigned from his board seat in 2018, citing a “potential future conflict of interest” with Tesla’s AI development for driverless cars.

“Musk later expressed disappointment over the company’s for-profit motivations and dealings with Microsoft. Since Musk’s departure, a number of other board members have left the company, including former congressman Will Hurd who cited a Presidential bid, LinkedIn co-founder Reid Hoffman over an investment conflict, and Neuralink director Shivon Zilis,” according to leading venture capitalist Chamath Palihapitiya.

The turmoil at OpenAI is expected to calm down as a new three-member board takes over, with Quora CEO Adam D'Angelo the only holdover from the previous one. An independent investigation into the events around Altman's firing could strengthen his position if it concludes that he did no wrong.

“That means the ultimate authority over OpenAI's fate, which used to rest with six people, now rests with three -- D'Angelo; new board chair Bret Taylor, former co-CEO of Salesforce; and former Treasury Secretary Lawrence Summers,” Axios reported.
